1.
IEEE Rev Biomed Eng; PP, 2022 Jun 23.
Article in English | MEDLINE | ID: covidwho-2230402

ABSTRACT

Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and disease precision staging. One major cause of this low impact is the lack of model transparency, which significantly limits AI adoption in real clinical practice. To solve this problem, AI models need to be explained to users. We have therefore conducted a comprehensive study of Explainable Artificial Intelligence (XAI) following the PRISMA methodology. Our findings suggest that XAI can improve model performance, instill trust in users, and assist users in decision-making. In this systematic review, we introduce common XAI techniques and illustrate their utility with specific examples of their application. We discuss the evaluation of XAI results, an important step in maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to demonstrate the evolution of novel techniques. Finally, we provide a best-practice guideline that developers can refer to during model experimentation, and we offer potential solutions, with specific examples, for common challenges in AI model experimentation. We hope this comprehensive review can promote AI adoption in biomedicine and healthcare.
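As a concrete illustration of one widely used XAI technique of the kind this review surveys, the sketch below computes a gradient-based saliency map for an image classifier. This is a generic example, not code from the paper; the model, input tensor, and target class are all placeholders.

```python
import torch
import torchvision.models as models

# Any trained image classifier will do; weights=None gives an
# untrained placeholder here.
model = models.resnet18(weights=None)
model.eval()

def saliency_map(model, image, target_class):
    """Return |d(class score)/d(pixel)|: large values mark the pixels
    that most influence the prediction for target_class."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Max over color channels gives one heat value per pixel.
    return image.grad.abs().max(dim=0).values

image = torch.rand(3, 224, 224)   # placeholder radiograph tensor
heatmap = saliency_map(model, image, target_class=0)
```

Overlaying such a heatmap on the input image lets a clinician check whether the model is attending to plausible regions, which is one way XAI can instill the trust the review describes.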

2.
IEEE J Biomed Health Inform; 25(7): 2376-2387, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1328979

ABSTRACT

Researchers have turned to deep learning methods to alleviate the enormous burden that reading radiological images places on clinicians during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models due to their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on the knowledge distillation network structure, in which attention is always transferred from the teacher network to the student network. Firstly, the teacher network extracts global features and concentrates on the infection regions to generate attention maps. It uses a deformable attention module to strengthen the response of infection regions and to suppress noise in irrelevant regions with an expanded receptive field. Secondly, an image fusion module combines the attention knowledge transferred from the teacher network to the student network with the essential information in the original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Lastly, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans , Lung/diagnostic imaging , SARS-CoV-2
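The attention transfer described in this abstract follows the general knowledge-distillation pattern in which a student network is trained to reproduce the teacher's spatial attention maps alongside its usual classification objective. The sketch below is a minimal, generic version of that loss (in the style of Zagoruyko and Komodakis); the paper's deformable attention and image fusion modules are not reproduced here, and the feature maps, logits, and labels are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def attention_map(feats):
    """Collapse a (B, C, H, W) feature map into a normalized (B, H*W)
    spatial attention map by summing squared activations over channels."""
    amap = feats.pow(2).sum(dim=1).flatten(1)   # (B, H*W)
    return F.normalize(amap, dim=1)

def attention_transfer_loss(teacher_feats, student_feats):
    """L2 distance between teacher and student attention maps;
    detaching the teacher keeps gradients flowing only into the student."""
    t = attention_map(teacher_feats.detach())
    s = attention_map(student_feats)
    return (t - s).pow(2).mean()

# Example: total loss = classification loss + weighted attention term.
logits = torch.randn(4, 3)             # student predictions: COVID / CAP / healthy
labels = torch.randint(0, 3, (4,))     # placeholder ground truth
t_feats = torch.randn(4, 64, 14, 14)   # hypothetical teacher feature maps
s_feats = torch.randn(4, 64, 14, 14, requires_grad=True)
loss = F.cross_entropy(logits, labels) \
     + 0.5 * attention_transfer_loss(t_feats, s_feats)
```

Matching attention maps rather than raw features is what lets the student inherit where the teacher looks (the infection regions) without having to match its full feature representation.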